Search for: All records

Creators/Authors contains: "Wang, Zilong"


  1. Abstract: Heterologous expression of polyketide synthase (PKS) genes in Escherichia coli has enabled the production of various valuable natural and synthetic products. However, the limited availability of malonyl-CoA (M-CoA) in E. coli remains a substantial impediment to high-titer polyketide production. Here we address this limitation by disrupting the native M-CoA biosynthetic pathway and introducing an orthogonal pathway comprising a malonate transporter and M-CoA ligase, enabling efficient M-CoA biosynthesis under malonate supplementation. This approach substantially increases M-CoA levels, enhancing fatty acid and polyketide titers while reducing the promiscuous activity of PKSs toward undesired acyl-CoA substrates. Subsequent adaptive laboratory evolution of these strains provides insights into M-CoA regulation and identifies mutations that further boost M-CoA and polyketide production. This strategy improves E. coli as a host for polyketide biosynthesis and advances understanding of M-CoA metabolism in microbial systems.
  2. Free, publicly-accessible full text available November 12, 2025
  3. Free, publicly-accessible full text available April 6, 2026
  4. Fine-tuning pre-trained language models is a common practice in building NLP models for various tasks, including settings with limited supervision. We argue that under the few-shot setting, formulating fine-tuning closer to the pre-training objective should unleash more of the benefits of the pre-trained language models. In this work, we take few-shot named entity recognition (NER) for a pilot study, where existing fine-tuning strategies differ substantially from pre-training. We propose a novel few-shot fine-tuning framework for NER, FFF-NER. Specifically, we introduce three new types of tokens, “is-entity”, “which-type” and “bracket”, so we can formulate NER fine-tuning as (masked) token prediction or generation, depending on the choice of the pre-training objective. In our experiments, we apply FFF-NER to fine-tune both BERT and BART for few-shot NER on several benchmark datasets and observe significant improvements over existing fine-tuning strategies, including sequence labeling, prototype meta-learning, and prompt-based approaches. We further perform a series of ablation studies, showing that few-shot NER performance is strongly correlated with the similarity between fine-tuning and pre-training.
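
As context for the FFF-NER abstract above, here is a minimal, hypothetical sketch of what casting span-level NER as masked token prediction can look like with an off-the-shelf masked language model (Hugging Face transformers, bert-base-cased). The prompt template, the label words, and the score_span helper are illustrative assumptions only; the paper's actual “is-entity”, “which-type”, and “bracket” token mechanics and its fine-tuning procedure are not reproduced here.

```python
# Hypothetical sketch: scoring candidate entity types for a span by asking a
# pre-trained masked language model to fill a single [MASK] in a cloze-style
# prompt. This illustrates the general "NER as token prediction" idea, not
# the exact FFF-NER construction.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def score_span(sentence: str, span: str, label_words: dict) -> str:
    """Return the entity label whose label word gets the highest MLM logit
    at the [MASK] position of a cloze prompt appended to the sentence."""
    prompt = f"{sentence} {span} is a {tokenizer.mask_token} entity ."
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the [MASK] token in the encoded prompt.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos[0]]
    # One label word per entity type (an assumption for illustration).
    scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
              for label, word in label_words.items()}
    return max(scores, key=scores.get)

label_words = {"PER": "person", "ORG": "organization", "LOC": "location", "O": "common"}
print(score_span("Zilong Wang joined the research team at UCSD .", "UCSD", label_words))
```

In a few-shot setting, such cloze predictions would typically be fine-tuned on the small labeled support set rather than used zero-shot; the point of the formulation is that the prediction task stays close to the masked-language-modeling objective used during pre-training.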